Critical Care Explorations
Ovid Technologies (Wolters Kluwer Health)
Preprints posted in the last 7 days, ranked by how well they match the content profile of Critical Care Explorations, based on 15 papers previously published in this journal. The average preprint has a 0.02% match score for this journal, so anything above that is already an above-average fit.
Wiseman, J.; Sibley, S.; Perez-Patrigeon, S.; Mekhaeil, M.; Hanley, M.; Hunt, M.; Boyd, T.; Grant, B.; Boyd, J. G.
Introduction: There is increasing interest in the peripheral administration of vasopressors for two main reasons: (1) to expedite vasopressor initiation in patients with refractory shock and (2) to avoid the potential complications associated with central venous catheter placement. The current evidence on peripheral vasopressor administration is based primarily on single-center observational studies. There are inconsistencies in the administration of peripheral vasopressors, including catheter gauge and location, monitoring practices, vasopressor concentrations, and duration of use. This has made it difficult for institutions to develop best-practice guidelines. A randomized controlled trial is needed to address this knowledge gap. Methods and analysis: The Peripheral Use of Low-dose Vasopressors for Safety and Efficacy (PULSE) in the intensive care unit is a prospective, unblinded feasibility study. Eligible patients will be 18 years or older, have no existing central venous catheter or peripherally inserted central catheter, and have shock requiring a minimum vasopressor dose of any of the following: norepinephrine 0.0625 mcg/kg/min, phenylephrine 0.625 mcg/kg/min, or epinephrine 0.0625 mcg/kg/min. Fifty patients will be randomized 1:1 into either the peripheral venous catheter or central venous catheter group. The primary outcome is feasibility, defined as (1) a recruitment rate of 4 participants per month, (2) a data capture rate of ≥90%, and (3) a <50% conversion rate from peripheral to central access. The secondary outcomes include the safety of peripheral vasopressor use, alive and central-line-free days, the number of attempts needed to place a catheter, volume status, in-hospital mortality rate, ICU and hospital length of stay, and patient-centred outcomes. Implications: The data collected from this study will inform the design of a definitive randomized controlled trial to assess the safety and efficacy of protocol-driven peripheral vasopressor administration. Ethics and dissemination: This study received approval (6042888) from the Queen's University Health Sciences/Affiliated Teaching Hospitals Research Ethics Boards. Results of this study will be presented at critical care conferences and submitted for publication. Trial registration number: NCT06920173 (https://clinicaltrials.gov/study/NCT06920173).
Gjertsen, M.; Yoon, W.; Afshar, M.; Temte, B.; Leding, B.; Halliday, S.; Bradley, K.; Kim, J.; Mitchell, J.; Sanders, A. K.; Croxford, E. L.; Caskey, J.; Churpek, M. M.; Mayampurath, A.; Gao, Y.; Miller, T.; Kruser, J. M.
Importance: Physicians routinely prognosticate to guide care delivery and shared decision making, particularly when caring for patients with critical illnesses. Yet these physician estimates are prone to inaccuracy and uncertainty. Artificial intelligence, including large language models (LLMs), shows promise in supporting or improving this prognostication. However, the performance of contemporary LLMs in prognosticating for the heterogeneous population of critically ill patients remains poorly understood. Objective: To characterize and compare the performance of LLMs and physicians when predicting 6-month mortality for hospitalized adults who survived critical illness. Design: Embedded mixed methods study with elicitation and comparison of prognostic estimates and reasoning from LLMs and practicing physicians. Setting: The publicly available, deidentified Medical Information Mart for Intensive Care (MIMIC)-IV v2.2 dataset. Participants: We randomly selected 100 hospitalizations of adult survivors of critical illness. Four contemporary LLMs (OpenAI GPT-4o, o3- and o4-mini, and DeepSeek-R1) and 7 physicians provided independent prognostic estimates for each case (1,100 total estimates; 400 LLM and 700 physician). Main outcomes and measures: For each case, LLMs and physicians used the hospital discharge summary and demographics to predict 6-month mortality (yes/no) and provide their reasoning (free text). We assessed prognostic performance using accuracy, sensitivity, and specificity, and used inductive, qualitative content analysis to characterize the reasoning. Results: Mean physician accuracy for predicting mortality was 70.1% (95% CI 63.7-76.4%), with sensitivity of 59.7% (95% CI 50.6-68.8%) and specificity of 80.6% (95% CI 71.7-88.2%). The top-performing LLM (OpenAI o4-mini) had an accuracy of 78.0% (95% CI 70.0-86.0%), with sensitivity of 80.0% (95% CI 67.4-90.2%) and specificity of 76.0% (95% CI 63.3-88.0%). The difference between mean physician and top-performing LLM accuracy was not statistically significant (p = 0.5). Qualitative analysis revealed similar patterns in LLM- and physician-expressed reasoning, except that physicians regularly and explicitly reported uncertainty while LLMs did not. Conclusion and Relevance: In this study, LLMs and physicians achieved comparable, moderate performance in predicting 6-month mortality after critical illness, with similar patterns in expressed reasoning. Our findings suggest LLMs could be used to support prognostication in clinical practice but also raise safety concerns due to the lack of LLM uncertainty expression.
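For readers unfamiliar with the reported metrics, here is a minimal Python sketch of computing accuracy, sensitivity, and specificity with Wilson 95% CIs from binary predictions. The data and variable names are simulated for illustration; this is not the study's code.

```python
import numpy as np
from statsmodels.stats.proportion import proportion_confint

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity with Wilson 95% CIs."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    out = {}
    for name, k, n in [("accuracy", tp + tn, len(y_true)),
                       ("sensitivity", tp, tp + fn),
                       ("specificity", tn, tn + fp)]:
        lo, hi = proportion_confint(k, n, alpha=0.05, method="wilson")
        out[name] = (k / n, lo, hi)
    return out

# Simulated 100 cases: died within 6 months (1) vs survived (0),
# rated by an imperfect (~75% accurate) predictor.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 100)
y_pred = np.where(rng.random(100) < 0.75, y_true, 1 - y_true)
print(binary_metrics(y_true, y_pred))
```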
Bider-Lunkiewicz, J.; Gasciauskaite, G.; Rück Perez, B.; Braun, J.; Willms, J.; Szekessy, H.; Nöthiger, C.; Hoffmann, M.; Milovanovic, P.; Keller, E.; Tscholl, D. W.
Purpose: This study evaluates the Visual Hemofilter, a novel decision-support and information transfer tool designed to assist with regional citrate anticoagulation (RCA) in hemofiltration. By representing hemofilter parameters and patient blood constituents as animated icons, the tool aims to improve clinicians' interpretation of blood gas results and RCA reference tables. We hypothesized that the Visual Hemofilter would enhance clinical decision-making by enabling faster and more accurate therapy adjustments, increasing clinicians' confidence in their decisions, and reducing cognitive workload compared with conventional methods. Methods: We conducted a prospective, randomized, computer-based simulation study across four intensive care units at the University Hospital Zurich. Twenty-six critical care professionals participated, each managing RCA scenarios using either the Visual Hemofilter or conventional methods involving blood gas analysis and reference tables. Following each scenario, participants made therapy adjustments and rated their decision confidence and cognitive workload. Results: Use of the Visual Hemofilter significantly improved decision accuracy (odds ratio [OR] 3.96; 95% CI 2.03-7.73; p < 0.0001) and reduced decision time by an average of 33 seconds (mean difference -33.3 seconds; 95% CI -39.4 to -27.2; p < 0.0001). Participants also reported greater confidence in their decisions (OR 5.41; 95% CI 2.49-11.77; p < 0.0001) and experienced lower cognitive workload (mean difference -15.05 points on the NASA-TLX (National Aeronautics and Space Administration Task Load Index) scale; 95% CI -18.99 to -11.13; p < 0.0001). Conclusions: The Visual Hemofilter enhances clinical decision-making in RCA by increasing accuracy and speed, boosting decision confidence, and reducing cognitive workload. This technology has the potential to reduce errors and better support critical care professionals in managing complex treatment scenarios.
Than, M.; Pickering, J. W.; Joyce, L. R.; Buchan, V. A.; Florkowski, C. M.; Mills, N. L.; Hamill, L.; Prystowsky, J.; Harger, S.; Reed, M.; Bayless, J.; Feberwee, A.; Attenburrow, T.; Norman, T.; Welfare, O.; Heiden, T.; Kavsak, P.; Jaffe, A. S.; Apple, F.; Peacock, W. F.; Cullen, L.; Aldous, S.; Richards, A. M.; Lacey, C.; Troughton, R.; Frampton, C.; Body, R.; Mueller, C.; Lord, S. J.; George, P. M.; Devlin, G.
BACKGROUND Point-of-care (POC) high-sensitivity cardiac troponin (hs-cTn) testing has the potential to expedite decision-making and reduce emergency department (ED) length of stay for patients presenting with possible myocardial infarction (MI) by ensuring that results are consistently available when clinicians look for them. We assessed the real-life effectiveness and safety of implementing POC hs-cTn testing in the ED. METHODS We conducted a pragmatic, stepped-wedge cluster randomized trial. The control arm was usual care with an accelerated diagnostic pathway utilizing a single-sample rule-out step with a central laboratory hs-cTn assay. The intervention arm used the same pathway with a POC hs-cTnI assay. The primary effectiveness outcome was ED length of stay assessed using a generalized linear mixed model, and the safety outcome was 30-day MI or cardiac death. RESULTS Six sites participated, with 59,980 ED presentations (44,747 individuals; 61±19 years; 49.5% female) from February 2023 to January 2025, of which 31,392 presentations were during the intervention arm. After adjustment for covariates associated with length of stay, the intervention reduced length of stay by 13% (95% confidence interval [CI], 9 to 16%; P<0.001), corresponding to a reduction of 47 minutes (95% CI, 33 to 61 minutes) from a mean length of stay in the control arm of 376 minutes. The 30-day MI or cardiac death rate was similar in the control and intervention arms (0.39% and 0.39%, respectively; P=0.54). CONCLUSIONS Implementation of whole-blood hs-cTnI testing at the POC into an accelerated diagnostic pathway was safe and reduced length of stay in the ED compared with laboratory testing.
Walters, R.; Allen, M. B.; Scheen, H.; Beam, C.; Waldrip, Z.; Singule-Kollisch, M.; Varisco, A.; Williams, J. G.; De Luca, D.; Varisco, B. M.
Background: In patients requiring respiratory support, clinicians rely on physical examination, radiologic, laboratory, and ventilator-derived measures to provide sufficient support while minimizing ventilator- and "work of breathing"-induced lung injury. Point-of-care lung ultrasound (LUS) is a widely available tool in hospital and clinic environments. To date, LUS has not been used to evaluate lung strain. Methods: We collected LUS images in four anesthetized, neuromuscularly blocked, and mechanically ventilated pigs being used for another experiment. A feature-tracking tool was developed that tracked echo-bright lung structures in ten-second clips, obtained in triplicate, of the right and left upper and lower lung fields at tidal volumes of 4, 6, 8, 10, and 12 mL/kg. Pleural lines were manually drawn, and a program for quantifying lung strain was developed with assistance from the Anthropic Claude artificial intelligence tool. Structures were identified in inspiratory and expiratory frames and tracked bidirectionally, with the median strain per frame used for calculations. Results: Triplicate measures of lung ultrasound images in four pigs had a median coefficient of variation of 35% (IQR 23-47%), and linear modeling of strain against tidal volumes of 4-12 mL/kg showed a positive correlation, with R2 values ranging from 0.89 to 0.97. Strain measurements were similar after bronchial administration of 1.5 M hydrochloric acid. Conclusions: Regional lung strain quantification using LUS is a viable and potentially useful tool for respiratory support management.
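The strain estimate described above reduces to comparing distances between tracked echo-bright structures at end-expiration and end-inspiration. A toy numpy sketch of that reduction, assuming tracked coordinates are already available from some feature tracker (the authors' Claude-assisted tool is not reproduced here):

```python
import numpy as np

def lung_strain(pts_exp, pts_insp):
    """Median linear strain from tracked echo-bright structures.

    pts_exp, pts_insp: (N, 2) arrays giving the same N structures'
    (x, y) positions at end-expiration and end-inspiration.
    Strain is taken as the relative change in pairwise distances.
    """
    pts_exp = np.asarray(pts_exp, float)
    pts_insp = np.asarray(pts_insp, float)
    i, j = np.triu_indices(len(pts_exp), k=1)
    d0 = np.linalg.norm(pts_exp[i] - pts_exp[j], axis=1)    # baseline lengths
    d1 = np.linalg.norm(pts_insp[i] - pts_insp[j], axis=1)  # inflated lengths
    return float(np.median((d1 - d0) / d0))

# Toy example: a uniform 5% stretch away from the origin yields strain 0.05.
pts0 = np.array([[0, 0], [10, 0], [0, 10], [8, 6]], float)
print(lung_strain(pts0, pts0 * 1.05))
```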
Houle, T. T.; Lebowitz, A.; Chtay, I.; Patel, T.; McGeary, D. D.; Turner, D. P.
Importance: Migraine attacks often occur unpredictably, limiting the ability of individuals to initiate timely preventive or preemptive treatment. Short-term probabilistic forecasting of migraine risk could enable more targeted management strategies. Objective: To externally validate the previously developed Headache Prediction Model (HAPRED-I), evaluate an updated continuously learning model (HAPRED-II), and assess the feasibility and short-term safety of delivering individualized probabilistic migraine forecasts directly to patients. Design, Setting, and Participants: Prospective 8-week cohort study conducted remotely at two academic medical centers in the United States (Massachusetts General Hospital and Wake Forest Health Sciences) between 2015 and 2019. Adults with recurrent migraine or tension-type headache completed twice-daily electronic diaries. A total of 230 participants contributed 23,335 diary entries across 11,862 participant-days of observation. Main Outcomes and Measures: Occurrence of a headache attack within 24 hours following each evening diary entry. Model performance was evaluated using discrimination (area under the receiver operating characteristic curve [AUC]) and calibration. Results: External validation of HAPRED-I demonstrated modest discrimination (AUC, 0.59; 95% CI, 0.57-0.61) and poor calibration, with predicted probabilities consistently exceeding observed headache risk. In contrast, the continuously updating HAPRED-II model demonstrated progressive improvement in predictive performance as participant-specific data accumulated. Discrimination increased from an AUC of 0.59 (95% CI, 0.57-0.61) during the first 14 days to 0.66 (95% CI, 0.63-0.70) after the first month, accompanied by improved calibration across predicted risk levels. Over the study period, 6999 individualized forecasts were delivered directly to participants. No evidence suggested that receipt of forecasts was associated with increasing headache frequency or worsening predicted headache risk trajectories. Conclusions and Relevance: A static migraine forecasting model demonstrated limited transportability to new individuals. In contrast, models that continuously update within individuals may improve predictive accuracy over time and enable real-time delivery of personalized migraine risk forecasts. Further work incorporating richer physiologic and contextual predictors will likely be necessary before such systems can reliably guide clinical treatment decisions.
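A minimal sketch of the continual-updating idea behind HAPRED-II: an online logistic model refit after every diary entry, so forecasts sharpen as participant-specific data accrue. The features and labels below are simulated; the actual model and predictors are the authors'.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
model = SGDClassifier(loss="log_loss", alpha=1e-3, random_state=0)

for day in range(60):                           # evening diary entries
    x = rng.normal(size=(1, 5))                 # e.g. stress, sleep, mood
    y = np.array([int(x[0, 0] + 0.5 * rng.normal() > 0)])  # headache next day?
    if day >= 14:                               # forecast before updating
        p = model.predict_proba(x)[0, 1]
        if day % 15 == 0:
            print(f"day {day}: forecast P(headache) = {p:.2f}, observed {y[0]}")
    model.partial_fit(x, y, classes=np.array([0, 1]))   # incremental update
```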
Martin, C. M.; Henderson, I.; Campbell, D.; Stockman, K.
Background: The instability-plasticity framework proposes that multimorbidity trajectories periodically enter instability phases that are vulnerable to escalation but also potentially modifiable through relational intervention. Whether such phases commonly resolve without acute care, or predominantly progress to hospitalisation, has not been quantified at scale. Objective: To quantify instability window outcomes across a longitudinal monitoring cohort; to test whether the characteristics distinguishing admitted from resolved windows reflect within-patient trajectory dynamics or between-patient severity; and to characterise which patient-reported and operator-rated signals reliably precede admission, using both a curated pilot sub-cohort and the full monitoring cohort with an explicit cross-cohort comparison. Methods: Two complementary analyses were conducted on data from the MonashWatch Patient Journey Record (PaJR) relational telehealth system. Instability windows were identified algorithmically (>=2 consecutive calls with Total_Alerts >=3; see the sketch after this abstract) across the full longitudinal dataset (16,383 calls, 244 patients, 2.5 years) and classified by linkage to ED and hospital admission data. Window characteristics were compared at window, patient, and paired within-patient levels. Pre-admission signal cascades were analysed in two configurations: a curated pilot sub-cohort (64 patients, 280 calls, ±10-day window, 103 admissions, December 2016-September 2017) and the full monitoring cohort (175 patients, 1,180 pre-admission calls, ±14-day window, December 2016-July 2019). A three-way cross-cohort comparison decomposed differences between the two configurations into pipeline and population effects. Results: 621 instability windows were identified across 157 patients (64% of the monitored cohort); 67.3% resolved without hospital admission or ED attendance, a rate stable across alert thresholds 1-5. In paired within-patient analysis (n = 70), duration in days (p = 0.002) and multi-domain breadth (p < 0.001) distinguished admitted from resolved windows; alert intensity did not. In the pilot sub-cohort, patient-reported illness prognosis (Q21) was the dominant pre-admission signal (GEE beta = +0.058, AUC = 0.647, p-BH = 0.018). This finding did not replicate in the full cohort: Q21 was non-significant (GEE beta = -0.008, p = 0.154, AUC = 0.507). Cross-cohort analysis identified selective curation of the pilot sub-cohort as the primary explanation. In the full cohort, six signals escalated significantly before admission after Benjamini-Hochberg correction: total alerts, health impairment (Q26), red alerts, self-rated health (Q3), patient concerns (Q1), and operator concern (Q34). Health impairment achieved the highest individual AUC (0.605) and showed the longest pre-admission lead. No individual signal exceeded an AUC of 0.61. Conclusions: Two-thirds of instability phases resolve without hospitalisation, providing direct empirical support for trajectory plasticity as a clinically frequent phenomenon. Within the same patient, persistence - in duration and in the consistency of high-severity multi-domain flagging across calls - distinguishes trajectories that tip into admission from those that resolve. The Q21 signal reversal between cohorts illustrates how selective curation can produce compelling but non-replicable findings in monitoring research. In the full population, objective alert signals and operator judgement, rather than patient illness prognosis, carry the pre-admission signal.
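The window-detection rule is stated explicitly (>=2 consecutive calls with Total_Alerts >=3), so it can be sketched in a few lines of pandas. The column name comes from the paper; everything else below is an illustrative assumption.

```python
import pandas as pd

def instability_windows(calls, alert_col="Total_Alerts",
                        threshold=3, min_run=2):
    """Identify instability windows: runs of >= min_run consecutive
    calls whose alert count meets the threshold (per the paper's rule).
    `calls` is one patient's call log, ordered by time."""
    above = calls[alert_col] >= threshold
    run_id = (above != above.shift()).cumsum()     # label consecutive runs
    windows = []
    for _, run in calls[above].groupby(run_id[above]):
        if len(run) >= min_run:
            windows.append((run.index[0], run.index[-1], len(run)))
    return windows

calls = pd.DataFrame({"Total_Alerts": [0, 3, 4, 1, 5, 3, 3, 0]})
print(instability_windows(calls))   # [(1, 2, 2), (4, 6, 3)]
```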
German Mesner, I.; Lake, D. E.; Kausch, S. L.; Krahn, K. N.; Gummadi, A.; Clark, T. W.; Niestroy, J. C.; Sahni, R.; Vesoulis, Z. A.; Gootenberg, D. B.; Ambalavanan, N.; Travers, C. P.; Fairchild, K. D.; Sullivan, B. A.
Premature very low birth weight (VLBW) infants have high rates of mortality and morbidity from sepsis, necrotizing enterocolitis, and respiratory failure requiring intubation and mechanical ventilation. Earlier detection of cardiorespiratory deterioration using vital signs from continuous physiological monitoring may lead to more timely interventions and improved outcomes. To further this research area, we present PreMo, a publicly available dataset of continuous heart rate and oxygen saturation, demographics, clinical events, and outcomes for 3,829 VLBW patients from four Neonatal Intensive Care Units (NICUs) in the United States. The PreMo dataset consists of a collection of parquet files, RO-Crate metadata, and sample usage code scripts hosted on the University of Virginia LibraData Dataverse website.
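Since the dataset ships as parquet files, access is likely a one-liner per table with pandas. The file paths and column names below are hypothetical placeholders; the real layout comes with the PreMo download and its sample scripts.

```python
import pandas as pd

# Hypothetical file layout; substitute the actual paths/columns from
# the PreMo release on the University of Virginia LibraData Dataverse.
vitals = pd.read_parquet("premo/vitals.parquet")       # HR, SpO2 time series
outcomes = pd.read_parquet("premo/outcomes.parquet")   # per-infant outcomes

# Example query: mean heart rate in the first postnatal week per infant
# (column names are assumptions for illustration).
week1 = vitals[vitals["postnatal_age_hours"] < 7 * 24]
print(week1.groupby("patient_id")["heart_rate"].mean().head())
```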
Morgan, C.; Calder, A.; Brugha, R.; Quyam, S.; Aurora, P.; McGovern, E.; Bush, A.; Moledina, S.
Background: TBX4 variants are a recognised cause of paediatric pulmonary hypertension (PH), often associated with interstitial lung disease (ILD). Evidence for ILD-directed therapy in this group is lacking. Methods: We conducted a retrospective study of children (≤18 years) with TBX4-associated PH at a national centre (2001-2025). ILD was defined using ChILD-EU criteria. Patients treated with pulsed intravenous methylprednisolone were assessed for response using ChILD-EU categories. Secondary outcomes included respiratory severity score (RSS), functional class (FC), echocardiographic measures, and NT-proBNP. Results: Of 21 children, 11 (52%) had ILD; 9 received corticosteroids. Median age at treatment was 0.8 years. A clear or best response occurred in 7/9 (78%). RSS improved in 6/9 (p=0.02), with all children on respiratory support showing partial or complete weaning. Functional class improved in all with FC III/IV at baseline (p=0.02). Right ventricular function improved (TAPSE z-score +1.65, p=0.04), and elevated NT-proBNP normalised. Key clinical milestones included ECMO weaning, transplant delisting, and discontinuation of prostacyclin therapy. No significant adverse effects were observed. Untreated children showed no early improvement. Conclusions: Corticosteroids were associated with meaningful improvements in respiratory and PH outcomes in TBX4-associated PH with ILD. Prospective evaluation is warranted.
Auger, S. D.; Varley, J.; Hargovan, M.; Scott, G.
Background: Current medical large language model (LLM) evaluations largely rely on small collections of cases, whereas rigorous safety testing requires large-scale, diverse, and complex cases with verifiable ground truth. Multiple sclerosis (MS) provides an ideal evaluation model, with validated diagnostic criteria and numerous paraclinical tests informing differential diagnosis, investigation, and management. Methods: We generated synthetic MS cases with ground-truth labels for diagnosis, localisation, and management. Four frontier LLMs (Gemini 3 Pro/Flash, GPT 5.2/5 mini) were instructed to analyse cases and provide anatomical localisation, differential diagnoses, investigations, and management plans. An automated evaluator compared these outputs to the ground-truth labels. Blinded subspecialty experts validated 70 cases for realism and automated-evaluator accuracy. We then evaluated LLM decision-making across 1,000 cases and scaled to 10,000 to characterise rare, catastrophic failures. Results: Subspecialist expert review confirmed 100% synthetic case realism and 99.8% (95% CI 95.5 to 100) automated evaluation accuracy. Across 1,000 generated MS cases, all LLMs successfully included MS in the differential diagnoses for more than 91% of cases. However, diagnostic competence did not translate into treatment safety. Gemini 3 models had low rates of clinically appropriate steroid recommendations (Flash: 7.2%, 95% CI 5.6 to 8.8; Pro: 15.8%, 95% CI 13.6 to 18.1) compared to GPT 5 mini (23.5%, 95% CI 20.8 to 26.1), frequently overlooking contraindications such as active infection. OpenAI models inappropriately recommended acute intravenous thrombolysis for MS cases (9.6% GPT 5.2; 6.4% GPT 5 mini) compared to below 1% for Gemini models. Expanded evaluation (to 10,000 cases) probed these errors in detail. Thrombolysis was recommended in 10.1% of cases lacking symptom timing information and paradoxically persisted (2.9%) even when symptoms were explicitly documented as more than 14 days old. Conclusion: Automated expert-level evaluation across 10,000 cases characterised artificial intelligence clinical blind spots hitherto invisible to small-scale testing. Massive-scale simulation and automated interrogation should become standard for uncovering serious failures and implementing safety guardrails before clinical deployment exposes patients to risk.
Khorsand, B.; Teichrow, D.; Lipton, R. B.; Ezzati, A.
Objective: To describe the design, feasibility, and baseline characteristics of the Migraine Impact on Neurocognitive Dynamics (MIND) study, a 30-day smartphone-based cohort for high-frequency assessment of cognition and symptoms in adults with migraine. Background: Cognitive symptoms are an important component of migraine burden, but they are difficult to measure using single-visit testing or retrospective questionnaires. Repeated smartphone-based assessment may better capture real-world variability in cognition and symptoms. Methods: Adults meeting International Classification of Headache Disorders, 3rd edition, criteria for migraine were enrolled remotely and completed 30 days of once-daily ecological momentary assessments and mobile cognitive tasks delivered through the Mobile Monitoring of Cognitive Change platform. Baseline measures assessed demographics, migraine characteristics, disability, mood, stress, and treatment patterns. Feasibility was evaluated using enrollment, completion, and retention metrics. Results: A total of 177 participants enrolled (mean age 38.8 ± 11.9 years; 79.7% female), including 80/177 (45.2%) with chronic migraine. Across the 30-day protocol, 3688 daily assessments were completed, representing 70.8% of all possible study days, and 70.6% of participants completed at least 20 days of monitoring. Completion remained above 60% across study days. At baseline, chronic migraine was associated with greater burden than low-frequency and high-frequency episodic migraine, including higher MIDAS scores (98.6 vs. 38.7 and 70.3), more days with concentration difficulty (16.0 vs. 7.9 and 11.5), and more days with functional interference (18.5 vs. 7.6 and 13.0). Conclusions: The MIND study demonstrates the feasibility of high-frequency smartphone-based assessment of cognition and symptoms in migraine and provides a methodological foundation for future analyses of within-person cognitive and symptom dynamics across the migraine cycle.
Sankaranarayanan, M.; Donahue, M. A.; Brooks, J. D.; Sun, S.; Newhouse, J. P.; Blacker, D.; Haneuse, S.; Hernandez-Diaz, S.; Moura, L. M. V. R.
Objective: Levetiracetam is commonly prescribed for seizure prophylaxis after acute ischemic stroke (AIS) and often continued beyond discharge. While its short-term effectiveness for preventing post-stroke seizures is established, it is unclear whether prolonged use improves survival, particularly in older adults. We estimated the effect of continued levetiracetam use on 90-day mortality among Medicare beneficiaries after AIS. Methods: Using Traditional Medicare claims data (2008-2021), we identified beneficiaries aged ≥66 years hospitalized for AIS who initiated outpatient levetiracetam within 90 days of discharge. After one month of continued post-stroke levetiracetam use (start of follow-up), we compared 90-day mortality between patients with a new levetiracetam dispensation within a 14-day grace period after the start of follow-up and those without one. We performed cloning, censoring, and weighting to address immortal time bias and estimated standardized mortality risks, risk differences, and 95% confidence intervals (CIs). Results: Among 3,212 eligible beneficiaries, 1,779 (55.4%) received a new levetiracetam dispensation within the 14-day grace period. Median age was 76 years (IQR 70-83); 57.8% were female. After adjustment for demographics, hospitalization characteristics, timing of initiation, and comorbidities, continued use was associated with lower 90-day mortality than discontinuation (53 vs 62 deaths per 1,000; risk difference -9 per 1,000; 95% CI, -12 to -5). The reduction was observed primarily among patients aged ≥75 years. Significance: Among older Medicare beneficiaries who initiated levetiracetam after AIS, continued outpatient use was associated with modestly lower 90-day mortality, particularly in those aged ≥75 years. These findings suggest potential benefits of levetiracetam continuation beyond the immediate post-stroke period.
Pozo, M.; Pape, A.; Locke, B.; Pettine, W. W.
Timely identification of intensive care unit (ICU) patients likely to exit the unit can support anticipatory workflows such as chart review, eligibility screening, and patient outreach prior to transfer. Most ICU discharge prediction studies report discrimination and calibration, but these metrics do not quantify the decision consequences of acting on predictions. Using adult ICU admissions from MIMIC-IV, we represented each ICU stay as a sequence of daily clinical summaries and trained logistic regression, random forest, and XGBoost models to predict next-day ICU transfer. Models achieved ROC AUC of 0.80-0.84 with differing calibration. We evaluated decision utility using decision curve analysis (DCA), where positive predictions trigger proactive review. Across thresholds, model-guided strategies outperformed review-all, review-none, and a simple clinical rule. To translate net benefit into implementable operations, we modeled a clinical trial recruitment workflow with an 8-hour daily time constraint, incorporating chart review and consent effort. At a feasible operating threshold (0.23), the model flagged ~23 charts/day and yielded ~1.23 enrollments/day under conservative eligibility and consent assumptions. These results demonstrate that DCA provides a transparent framework for determining when ICU transfer predictions are worth using and how thresholds should be selected to align with real-world workflow constraints. Data and Code Availability: This research has been conducted using data from MIMIC-IV. Researchers can request access via PhysioNet. Implementation code is available upon request.
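Net benefit has a closed form, NB(pt) = TP/n - (FP/n) * pt/(1-pt), which makes the decision-curve comparison easy to sketch. The 0.23 threshold echoes the paper's operating point; the outcome labels and model probabilities below are simulated, not the study's data.

```python
import numpy as np

def net_benefit(y_true, p_hat, pt):
    """Decision-curve net benefit at threshold pt:
    NB = TP/n - (FP/n) * pt / (1 - pt)."""
    y_true, p_hat = np.asarray(y_true), np.asarray(p_hat)
    flag = p_hat >= pt
    n = len(y_true)
    tp = np.sum(flag & (y_true == 1)) / n
    fp = np.sum(flag & (y_true == 0)) / n
    return tp - fp * pt / (1 - pt)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)                                # transfer yes/no
p = np.clip(0.6 * y + 0.05 + 0.3 * rng.random(1000), 0, 1)  # toy model
for pt in (0.10, 0.23, 0.40):                               # 0.23 = paper's threshold
    nb_model = net_benefit(y, p, pt)
    nb_all = y.mean() - (1 - y.mean()) * pt / (1 - pt)      # review-all strategy
    print(f"pt={pt:.2f}: model {nb_model:+.3f}, review-all {nb_all:+.3f}")
```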
Natarajan, T.; Kim, J. H.; Salgado, C. D.; Jha, A.; Baker, C.; Sellers, S. L.; Aslan, J. E.; Hinds, M. T.; Yoganathan, A. P.; Dasi, L. P.
Background: Transcatheter aortic valve replacement (TAVR) has transformed the management of aortic stenosis; however, adverse outcomes such as leaflet thrombosis and hypoattenuating leaflet thickening (HALT) remain clinically significant concerns. Flow disturbances resulting from valve canting may alter local hemodynamics and promote thrombogenic conditions. We investigated how modest transcatheter heart valve (THV) canting alters cusp-specific sinus flow and washout and promotes localized thrombogenic microenvironments associated with leaflet surface thrombus formation, using particle image velocimetry, a physiologic blood loop, and tissue analysis. Methods: A patient-derived aortic root model was used to evaluate the hemodynamic and thrombogenic effects of THV canting at -10° (anti-curvature), 0° (neutral), and +10° (along-curvature). High-resolution particle image velocimetry quantified sinus flow fields and washout characteristics, and complementary whole-blood loop experiments enabled histologic assessment of leaflet-associated thrombus formation. Results: Canting redistributed systolic jet orientation and sinus recirculation in a direction-dependent manner while preserving global hemodynamic measurements. The most spatially constrained cusp showed the largest increase in stasis and the slowest washout. In the right coronary cusp, anti-curvature canting increased the fraction of sinus area with velocity magnitude <0.05 m/s to 92%, versus 43% in neutral and 10% in along-curvature deployments, and prolonged neo-sinus (T90) washout to 4.7 cycles versus 2.9 and 1.8 cycles, respectively. Histology localized surface-adherent platelet/fibrin thrombus to these poorly washed regions, most prominently on the right coronary cusp leaflet in anti-curvature deployments. Left and noncoronary cusp responses shifted with tilt direction, indicating redistribution rather than uniform worsening of thrombogenic conditions. Conclusions: Even modest noncoaxial deployment is sufficient to create sinus-resolved thrombogenic microenvironments that are not captured by global gradient or effective orifice area. Deployment configuration is therefore a modifiable determinant of post-TAVR leaflet thrombosis risk and may contribute to HALT.
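The two washout metrics reported (stasis fraction and T90) are simple reductions over a velocity field and a cycle-by-cycle washout curve. A toy sketch under assumed array layouts, not the authors' PIV pipeline:

```python
import numpy as np

def stasis_fraction(vx, vy, v_stasis=0.05):
    """Fraction of valid (non-NaN) sinus pixels with |v| < v_stasis (m/s),
    given PIV velocity components on a 2D grid (NaN outside the sinus)."""
    speed = np.hypot(vx, vy)
    valid = ~np.isnan(speed)
    return float(np.mean(speed[valid] < v_stasis))

def t90_washout(tracer_per_cycle):
    """First cardiac cycle at which a normalized tracer signal in the
    neo-sinus has decayed by 90% from its initial value."""
    c = np.asarray(tracer_per_cycle, float)
    c = c / c[0]
    below = np.nonzero(c <= 0.10)[0]
    return int(below[0]) if below.size else None

vx = np.array([[0.01, 0.2], [np.nan, 0.03]])
vy = np.zeros((2, 2))
print(stasis_fraction(vx, vy))                       # 2 of 3 valid pixels -> 0.667
print(t90_washout([1.0, 0.6, 0.36, 0.22, 0.13, 0.08]))  # decays past 10% at cycle 5
```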
Carlquist, J.; Scott, S. S.; Wright, J. C.; Jianing, M.; Peng, J.; Mokadam, N. A.; Whitson, B. A.; Smith, S.
Purpose: Obstructive sleep apnea (OSA) is a common comorbidity in heart failure (HF) patients, with prevalence increasing as HF severity worsens. While CPAP/BiPAP has been shown to reduce disease burden and mortality in the general HF population, it is unclear whether these benefits extend to patients with left ventricular assist devices (LVADs). We sought to determine whether OSA affects long-term survival in newly implanted LVAD patients and whether CPAP/BiPAP treatment confers mortality benefits. Methods: This single-center retrospective study included patients who underwent LVAD implantation between January 2007 and February 2022. Recipients were stratified by OSA status (OSA vs No-OSA), and those with OSA were further categorized based on CPAP/BiPAP compliance. Comparative statistics and Kaplan-Meier survival analyses were performed, with log-rank tests used to compare groups and assess survival differences. A Cox proportional hazards model was fitted to evaluate the association between risk factors and survival among patients with OSA and No-OSA. Results: Before LVAD implantation, patients with OSA had a higher body mass index, a higher prevalence of hypertension, and a higher rate of implantable cardioverter-defibrillator placement than those without OSA. OSA was not associated with increased postoperative complications. Although survival did not differ significantly between OSA and No-OSA patients (p=0.33), CPAP/BiPAP-compliant OSA patients had significantly better survival than noncompliant patients (p=0.0099). Conclusions: LVAD patients with OSA who consistently use CPAP/BiPAP have better survival than those who do not. CPAP/BiPAP is a simple, low-risk treatment that can reduce mortality in this population. Therefore, increased perioperative screening for OSA should be considered for patients receiving LVADs. Multicenter studies are needed to confirm our findings.
Benchimol-Barbosa, P. R.; Loayza-Benchimol-Barbosa, A. C.; Carvalhaes, C. G.; Kantharia, B. K.
Background: Left ventricular (LV) remodeling in chronic Chagas cardiomyopathy (CCC) is progressive, but whether population-level LV mass dynamics follow nonlinear patterns, and whether the loss of dynamic complexity tracks mortality, is unknown. Methods: Fifty outpatients from the SEARCH-Rio cohort were followed up for 10 years. Serial echocardiography provided paired LV mass measurements fitted to the logistic equation x_{n+1} = r·x_n·(1 − γ·x_n). Lyapunov exponents (LE) were computed from consecutive inter-patient derivatives. A clinical risk score was developed using Firth penalized logistic regression with bootstrap validation (B = 1,000) and Cox-Firth sensitivity analysis. Results: LV mass remodeling fitted the logistic equation with r = 3.91 ± 0.18 and γ = 1.27 ± 0.06, compatible with dynamics near the complexity threshold (r ≈ 3.57). Survivors showed positive LE (+0.339 ± 0.543), and nonsurvivors showed negative LE (-0.825 ± 0.972; p = 0.015). The fixed-point equilibrium of ≈280 g was approached by 63% of patients at follow-up (p = 0.0003), a pattern indistinguishable from regression to the mean in the present design. Firth regression identified EF < 51.7% and maximum heart rate < 109 bpm as independent predictors (optimism-corrected AUC = 0.959); the derived score showed a monotonic mortality gradient accompanied by lower LE across strata (Spearman ρ = -0.369, p = 0.004). Conclusions: These exploratory findings are compatible with nonlinear LV mass remodeling in Chagas disease and with an association between loss of dynamic complexity and mortality. Replication in larger cohorts, formal model comparisons, and prospective validation of the score are warranted. Nonstandard Abbreviations and Acronyms: ASE, American Society of Echocardiography; AUC, area under the receiver operating characteristic curve; CCC, chronic Chagas cardiomyopathy; CHF, congestive heart failure; CI, confidence interval; EF, ejection fraction; HRV, heart rate variability; IQR, interquartile range; IVS, interventricular septum; LA, left atrial; LAHB, left anterior hemiblock; LBBB, left bundle branch block; LE, Lyapunov exponent; LV, left ventricular; LVEDD, left ventricular end-diastolic diameter; LVESD, left ventricular end-systolic diameter; NSVT, nonsustained ventricular tachycardia; NYHA, New York Heart Association; OR, odds ratio; PSVT, paroxysmal supraventricular tachycardia; PVC, premature ventricular complex; PW, posterior wall; RBBB, right bundle branch block; ROC, receiver operating characteristic; SAECG, signal-averaged electrocardiogram; SD, standard deviation; SDNN, standard deviation of normal-to-normal intervals; SEM, standard error of the mean; SVE, supraventricular ectopy
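For context, the cited complexity threshold (r ≈ 3.57) is the classical onset of chaos in the logistic map. A minimal sketch of a Lyapunov-exponent estimate for the fitted map, using the standard orbit-average of log|f'(x)| rather than the authors' inter-patient derivative method:

```python
import numpy as np

def logistic_lyapunov(r, gamma=1.0, x0=0.2, n=5000, burn=500):
    """Lyapunov exponent of x_{n+1} = r*x_n*(1 - gamma*x_n), estimated as
    the orbit average of log |f'(x)| = log |r*(1 - 2*gamma*x)|."""
    x, acc = x0, 0.0
    for i in range(n):
        x = r * x * (1.0 - gamma * x)
        if i >= burn:                      # skip the transient
            acc += np.log(abs(r * (1.0 - 2.0 * gamma * x)))
    return acc / (n - burn)

print(logistic_lyapunov(3.2))               # < 0: stable/periodic dynamics
print(logistic_lyapunov(3.91, gamma=1.27))  # > 0: chaotic regime (fitted values)
```

Because the substitution y = γ·x reduces the map to the standard form y_{n+1} = r·y_n·(1 − y_n), the exponent depends on r alone, which is why the 3.57 threshold applies regardless of γ.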
Crystal, O.; Farina, J. M. M.; Scalia, I. G.; Ayoub, C.; Park, H.-B.; Kim, K. A.; Arsanjani, R.; Lester, S. J.; Banerjee, I.
Background: Accurate assessment of left ventricular outflow tract (LVOT) gradients is critical for hypertrophic cardiomyopathy (HCM) management, yet Doppler-based measurements are technically demanding and require expertise. Objective: To develop a multi-view deep learning model capable of classifying LVOT obstruction (>20 mmHg) using routine 2D echocardiographic windows without reliance on Doppler imaging. Methods: We trained and externally validated a cross-attention-based video-to-video fusion framework that integrated EchoPrime-derived video representations from three standard transthoracic echocardiographic views to classify LVOT gradients. Results: Training was performed on a derivation cohort (N = 1833) from a tertiary care system in the United States, with model performance evaluated on an internal held-out test set (N = 275) and a Korean external validation cohort (N = 46). Single-view baselines showed limited discrimination (external AUROCs 0.47-0.70). In contrast, the domain-specific foundation model (EchoPrime) achieved superior single-view performance (AUROCs 0.75-0.80 internal; 0.79-0.83 external), highlighting the importance of echo-specific pretraining and temporal modeling. The proposed multi-view fusion further enhanced predictive performance, with the late-fusion model reaching an AUROC of 0.84 on the external cohort despite significant population shift. Conclusions: These results suggest that LVOT physiology is encoded in routine 2D imaging and can be leveraged for clinically relevant gradient classification without Doppler input; the proposed AI-guided strategy demonstrates substantial cost savings compared with a screen-all approach. By integrating complementary spatial-temporal information across multiple views, our approach generalizes robustly across populations and may enable real-time decision support, extend LVOT assessment to portable or resource-limited settings, and complement Doppler-based evaluation for longitudinal HCM management.
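The paper's fusion model cross-attends over learned EchoPrime video embeddings; the simplest late-fusion baseline it generalizes can be sketched as averaging per-view probabilities. The view names and scores below are illustrative assumptions, not the study's pipeline.

```python
import numpy as np

def late_fusion_probability(view_probs):
    """Average per-view probabilities P(LVOT gradient > 20 mmHg) from
    independent single-view classifiers (simplest late-fusion rule)."""
    return float(np.mean(list(view_probs.values())))

# Hypothetical per-view scores for one study (view names are assumptions).
p = late_fusion_probability({"PLAX": 0.62, "A4C": 0.55, "A3C": 0.71})
label = "obstructed" if p >= 0.5 else "non-obstructed"
print(f"fused P(obstruction) = {p:.2f} -> predict {label}")
```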
Chen, Y.; Law, Z. K.; Zhou, X.; Dai, Q.; Xiang, S.; Xiao, X.; Ma, J.; Feng, M.; Peng, W.; Zhou, S.; Chen, L.; Zhou, Y.; Lai, Y.; Yeo, L.; An, S.; He, Y.; Pan, S.-Y.
Objective: To compare the safety and efficacy of bridging intravenous thrombolysis (IVT) plus endovascular thrombectomy (EVT) versus direct EVT in patients with acute ischemic stroke (AIS) due to anterior circulation large vessel occlusion (LVO) treated within the 6- to 24-hour time window. Methods: This is a retrospective analysis of a prospective EVT registry from 10 comprehensive stroke centers in China and Singapore between 2019 and 2024. Eligible patients had anterior circulation LVO, underwent EVT within 6-24 hours of onset, and had ASPECTS ≥6, NIHSS ≥6, and pre-stroke mRS ≤2. Patients were stratified into bridging IVT + EVT (IVT group) versus direct EVT alone (non-IVT group). Propensity score matching (1:2 ratio) was performed to balance baseline covariates. The primary outcome was 3-month favorable functional outcome (mRS 0-2). Secondary outcomes included successful recanalization (mTICI 2b-3), symptomatic intracranial hemorrhage (sICH), hemorrhagic transformation (HT), and 3-month mortality. In the matched cohort, binary outcomes were compared using the Cochran-Mantel-Haenszel test. Results: Of 772 included patients, 110 (14.2%) received bridging IVT and 662 (85.8%) received direct EVT. After propensity score matching, 202 non-IVT patients were matched to 101 IVT patients, with all covariates well balanced (absolute SMD <0.10). In the matched cohort, bridging IVT was not associated with a significant difference in 3-month favorable outcome (44.55% vs. 47.03%; common OR 0.91; 95% CI 0.56-1.46), successful recanalization (91.09% vs. 90.10%; OR 1.11; 95% CI 0.51-2.44), sICH (5.94% vs. 9.41%; OR 0.61; 95% CI 0.24-1.58), HT (23.76% vs. 23.27%; OR 1.03; 95% CI 0.57-1.85), or 3-month mortality (15.84% vs. 13.37%; OR 1.22; 95% CI 0.62-2.37). Conclusion: In this large multicenter propensity score-matched analysis, bridging intravenous thrombolysis before endovascular thrombectomy in the 6- to 24-hour time window was not significantly associated with improved efficacy or increased safety risks compared with direct endovascular therapy alone.
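A hedged sketch of 1:2 propensity score matching as described, using simulated covariates and nearest-neighbour matching with replacement for brevity (the study additionally verified covariate balance via standardized mean differences):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.normal(size=(772, 5))            # simulated baseline covariates
treated = rng.random(772) < 0.14         # ~14% receive bridging IVT

# Propensity score: P(treatment | covariates)
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# 1:2 nearest-neighbour matching on the propensity score
controls = np.flatnonzero(~treated)
nn = NearestNeighbors(n_neighbors=2).fit(ps[controls].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_controls = controls[idx.ravel()]
print(treated.sum(), "treated matched to", len(matched_controls),
      "controls (with replacement)")
```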
Werner, C. J.; Sanchez-Garcia, E.; Mall, B.; Meyer, T.; Pinho, J.; Schulz, J. B.; Schumann-Werner, B.
Multi-consistency testing during flexible endoscopic evaluation of swallowing (FEES) is clinically necessary but introduces selection bias: worst scores inflate severity because the number of consistencies tested covaries with disease severity. In this retrospective observational study of hospitalized neurological patients, we derived and validated the FEES Dysphagia Index (FDI) in two temporally independent cohorts (Cohort 1: 2013-2018, N=1,257; Cohort 2: 2021-2025, N=1,686) from a single center. FDI-S averages Penetration-Aspiration Scale (PAS) scores across tested consistencies (0-100 scale); FDI-E uses Yale Pharyngeal Residue scores; FDI-C combines both. Selection bias was quantified using sequential branching-tree inverse probability weighting (IPW). Worst PAS overestimated severity by 24%; FDI deviated by <2%. FDI-C was significantly superior to Worst PAS for hospital-acquired pneumonia (HAP; AUC 0.70 vs. 0.60, p<0.001), mortality (0.71 vs. 0.62, p=0.040), and restricted oral intake (0.90 vs. 0.74, p<0.001), and statistically equivalent to clinician-rated severity. FDI-C mapped linearly onto ordinal Functional Oral Intake Scale (FOIS) values (proportional odds RCS p=0.99). With functional status and diagnosis, FDI-C reconstructed the clinician's oral intake recommendation with AUC up to 0.93. The FDI-C-mortality relationship was sigmoidal, with a clinically relevant transition zone between ~50 and ~85. FDI-C is a bias-resilient, bedside-calculable score with interval-scale properties that captures expert clinical judgment, suitable both as a clinical decision support tool and as a continuous research endpoint.
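FDI-S is described as the average PAS score across the consistencies actually tested, rescaled to 0-100. A sketch under the assumption that PAS 1-8 maps linearly onto that range (the published scaling may differ):

```python
import numpy as np

def fdi_s(pas_scores):
    """FDI-S sketch: mean Penetration-Aspiration Scale score across the
    consistencies actually tested, rescaled to 0-100. Assumes PAS 1
    (best) .. 8 (worst) maps linearly onto 0 .. 100."""
    pas = np.asarray(pas_scores, float)
    return float((pas.mean() - 1.0) / 7.0 * 100.0)

# e.g. three consistencies tested: PAS 2 (liquid), 5 (semisolid), 3 (solid)
print(fdi_s([2, 5, 3]))       # ~33.3
print(fdi_s([2, 5, 3, 8]))    # an added worst score shifts the mean (50.0),
                              # whereas Worst PAS would jump straight to 8
```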
Nkosi-Mjadu, B. E.
Background: South Africa's public healthcare system serves most of the population through approximately 3,900 primary healthcare clinics characterised by long waiting times and high volumes of repeat-prescription visits. No published pre-arrival digital triage system operates across all 11 official South African languages while aligning with the South African Triage Scale (SATS). This paper reports the design and preliminary safety validation of BIZUSIZO, a hybrid deterministic-AI WhatsApp triage system. Methods: BIZUSIZO delivers SATS-aligned triage via WhatsApp, combining AI-assisted free-text classification (Claude Haiku 4.5) with a Deterministic Clinical Safety Layer (DCSL) that overrides AI output for 53 clinical discriminator categories (14 RED, 19 ORANGE, 20 YELLOW) coded in all 11 official languages and independent of AI availability. A five-domain risk factor assessment can only upgrade the triage level. One hundred and twenty clinical vignettes in patient language (English, isiZulu, isiXhosa, Afrikaans; 30 per language) were scored against a developer-assigned gold standard with independent blinded nurse review. A 121-vignette multilingual DCSL safety consistency check across all 11 languages and a 220-call post-hoc framing sensitivity evaluation (110 paired vignettes) were also conducted. Results: Under-triage was 3.3% (4/120; 95% CI: 0.9%-8.3%) with no RED under-triage; exact concordance was 80.0% (96/120) and quadratic weighted kappa 0.891 (95% CI: 0.827-0.932). One two-level under-triage was observed on a non-RED presentation (V072, isiXhosa burns vignette, ORANGE→GREEN); one two-level over-triage was observed (V054, isiZulu deep laceration, YELLOW→RED). In the framing sensitivity evaluation, AI-only classification achieved 50.9% RED invariance under adversarial framing; full-pipeline classification achieved 95.0% in the four validated languages, with the DCSL rescuing 18 of 23 AI drift cases. Conclusions: A hybrid deterministic-AI triage system with DCSL-based emergency detection achieved zero RED under-triage and consistent RED detection across all 11 official languages. The 16.7% over-triage rate falls within published South African SATS ranges (13.1-49%). A single two-level under-triage event was observed on an isiXhosa burns vignette (ORANGE→GREEN) and is discussed in the Limitations. Findings are preliminary; prospective validation against independent nurse triage is the necessary next step.
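The hybrid architecture is a deterministic override plus an upgrade-only risk adjustment, which can be sketched compactly. The level ordering follows SATS; the function shape and inputs below are invented for illustration, not BIZUSIZO's implementation.

```python
LEVELS = ["GREEN", "YELLOW", "ORANGE", "RED"]   # ascending SATS urgency

def triage(ai_level, dcsl_level=None, risk_upgrade=0):
    """Combine AI output with the Deterministic Clinical Safety Layer.

    dcsl_level: level fired by a matched discriminator phrase (or None);
    it can only move the result toward higher urgency. risk_upgrade is
    the upgrade-only contribution of the five-domain risk assessment."""
    level = LEVELS.index(ai_level)
    if dcsl_level is not None:
        level = max(level, LEVELS.index(dcsl_level))   # deterministic override
    return LEVELS[min(level + risk_upgrade, len(LEVELS) - 1)]

print(triage("GREEN", dcsl_level="ORANGE"))   # ORANGE: DCSL rescues AI drift
print(triage("YELLOW", risk_upgrade=1))       # ORANGE: risk factors upgrade
```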